CONNECT 2400. Now your computer was bridged to the other; anything going out your serial port was encoded as sound by your modem and decoded at the other end, and vice-versa.
But what, exactly, was the other end?
It might have been another person at their computer. Turn on local echo, and you could see what they did. Maybe you'd send files to each other. But in my case, the answer was different: PC Magazine.
71510,1421. CompuServe had forums, and files. Eventually I would use TapCIS to queue up things I wanted to do offline, to minimize phone usage online.
CompuServe eventually added a gateway to the Internet. For the sum of somewhere around $1 a message, you could send or receive an email from someone with an Internet email address! I remember the thrill of one time, as a kid of probably 11 years, sending a message to one of the editors of PC Magazine and getting a kind, if brief, reply back!
But inevitably I had to get a domain of my own, complete.org, as well. At the time, the process was a bit lengthy and involved downloading a text file form, filling it out in a precise way, sending it to InterNIC, and probably mailing them a check. Well, I did that, and in September of 1995, complete.org became mine. I set up sendmail on my local system, as well as INN to handle the limited Usenet newsfeed I requested from the ISP. I even ran Majordomo to host some mailing lists, including some that were surprisingly high-traffic for a few-times-a-day long-distance modem UUCP link!
The modem client programs for FreeBSD were somewhat less advanced than for OS/2, but I believe I wound up using Minicom or Seyon to continue to dial out to BBSs and to continue to use Learning Link. So all the while I was setting up my local BBS, I continued to have access to the text Internet, consisting chiefly of Gopher for me.
Under Canadian Radio-television and Telecommunications Commission (CRTC) rules in place since 2017, telecom networks are supposed to ensure that cellphones are able to contact 911 even if they do not have service.

I could personally confirm that my phone couldn't reach 911 services, because all calls would fail: the problem was that towers were still up, so your phone wouldn't fall back to alternative service providers (which could have resolved the issue). I can only speculate as to why Rogers didn't take cell phone towers out of the network to let phones work properly for 911 service, but it seems like a dangerous game to play. Hilariously, the CRTC itself didn't have a reliable phone service due to the service outage:
Please note that our phone lines are affected by the Rogers network outage. Our website is still available: https://crtc.gc.ca/eng/contact/

https://mobile.twitter.com/CRTCeng/status/1545421218534359041

I wonder if they will file a complaint against Rogers themselves about this. I probably should. It seems the federal government is thinking more of the same medicine will fix the problem and has told companies they should "help" each other in an emergency. I doubt this will fix anything, and it could actually make things worse if the competitors interoperate more, as it could cause multi-provider, cascading failures.
He pulls from Ethan Zuckerman's idea of a web that is plural in purpose: that just as pool halls, libraries and churches each have different norms, purposes and designs, so too should different places on the internet. To achieve this, Tarnoff wants governments to pass laws that would make the big platforms unprofitable and, in their place, fund small-scale, local experiments in social media design. Instead of having platforms ruled by engagement-maximizing algorithms, Tarnoff imagines public platforms run by local librarians that include content from public media.

(Links mine: the Washington Post obviously prefers not to link to the real web, and instead doesn't link to Zuckerman's site at all and suggests Amazon for the book, in a cynical example.)

And in another example of how the private sector has failed us, there was recently a fluke in the AMBER alert system where the entire province was warned about a loose shooter in Saint-Elzéar except the people in the town, because they have spotty cell phone coverage. In other words, millions of people received a strongly worded, "life-threatening" alert for a town sometimes hours away, except the people most vulnerable to it. Not missing a beat, the CAQ party is promising more of the same medicine and giving more money to telcos to fix the problem, proposing to spend three billion dollars on private infrastructure.
I ran curl -I https://website.org/ and it hung.
Wrong assumption: Something is wrong with nginx. Why else would it just hang?
Reconsidered conclusion: The resource (the home page) is a MISS, so nginx has to retrieve it from the origin, but the origin is overloaded and timing out, so my request is also timing out. Maybe something is wrong with the nginx caching configuration, since the home page really should be a HIT, but that's another problem.
Action: I changed the configuration from our normal caching set of directives to our aggressive caching set of directives, reloaded nginx, and curl -I https://website.org/ still hung.
Wrong assumption: Aggressive caching isn't working and I need a different configuration.
Reconsidered conclusion: The home page still hasn't been loaded from the origin, so every request for it is going to be a MISS, and is going to hang, until nginx is able to fill the cache with it. The configuration change might be the right change; we just need the origin to calm down before we will know.
Action: I restarted PHP on the origin to free up PHP processes so my home page request could fill the cache, and curl -I https://website.org/ still hung.
Wrong assumption: WTF! The world is ending!
Reconsidered conclusion: The regular traffic which is accessing other pages
(not the home page) consumed all the available PHP processes on the origin
before my request for the home page could complete, so nginx is still unable
to fill the cache with the home page.
Action: Once we got things under control, I changed the caching level from aggressive back down to normal. I ran curl -I https://website.org/ and it was HITting. I concluded that we don't need the aggressive cache after all. Got some coffee, came back later, ran it again, and it consistently showed MISS.
Wrong assumption: What?!? Did something change on the origin to stop the cache from working?

Reconsidered conclusion: The aggressive cache cached the home page for 5 minutes. Even after changing to normal caching, the home page was still cached, so it was served from the cache. After 5 minutes, the cache expired. Only then did the normal cache settings come into play to determine whether the request would be cached or not. In other words, you have to wait for the cache to expire (or bust the cache) before you can effectively know if the new cache settings are working.
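The post doesn't show the actual directives, so here is a hypothetical sketch of what an "aggressive" caching block plus a cache-status header could look like. The directive names and the $upstream_cache_status variable are standard nginx features; the zone name and timings are assumptions, chosen to match the 5-minute cache described above:

```nginx
# Hypothetical "aggressive" caching settings; the zone name and times are assumed.
proxy_cache           cache_zone;                # assumed cache zone name
proxy_cache_valid     200 301 302 5m;            # cache good responses for 5 minutes
proxy_ignore_headers  Cache-Control Expires;     # cache even if the origin says not to
proxy_cache_use_stale error timeout updating;    # serve stale content while refreshing

# Expose cache status so curl -I shows HIT or MISS in the response headers.
add_header X-Cache-Status $upstream_cache_status always;
```

With something like that last line in place, curl -I https://website.org/ returns a header such as X-Cache-Status: HIT or X-Cache-Status: MISS, which is one way the HIT/MISS behaviour described above can be observed.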
Publisher: Oxford University Press
Copyright: November 2019
ISBN: 0-19-027128-0
Format: Kindle
Pages: 232
The American moral and social philosopher Eric Hoffer reportedly said that every great cause begins as a movement, becomes a business, and eventually degenerates into a racket. The reform movement to make healthcare safer is clearly a great cause, but patient safety efforts are increasingly following Hoffer's path.

Robert Wears was Professor of Emergency Medicine at the University of Florida specializing in patient safety. Kathleen Sutcliffe is Professor of Medicine and Business at Johns Hopkins. This book is based on research funded by a grant from the Robert Wood Johnson Foundation, for which both Wears and Sutcliffe were principal investigators. (Wears died in 2017, but the acknowledgments imply that at least early drafts of the book existed by that point and it was indeed co-written.)

The anchor of the story of patient safety in Still Not Safe is the 1999 report from the Institute of Medicine entitled To Err is Human, to which the authors attribute an explosion of public scrutiny of medical safety. The headline conclusion of that report, which led nightly news programs after its release, was that 44,000 to 120,000 people died each year in the United States due to medical error. This report prompted government legislation, funding for new safety initiatives, a flurry of follow-on reports, and significant public awareness of medical harm.

What it did not produce, in the authors' view, is significant improvements in patient safety. The central topic of this book is an analysis of why patient safety efforts have had so little measurable effect. The authors attribute this to three primary causes: an unwillingness to involve safety experts from outside medicine or absorb safety lessons from other disciplines, an obsession with human error that led to profound misunderstandings of the nature of safety, and the misuse of safety concerns as a means to centralize control of medical practice in the hands of physician-administrators. (The term used by the authors is "managerial, scientific-bureaucratic medicine," which is technically accurate but rather awkward.)

Biggest complaint first: This book desperately needed examples, case studies, or something to make these ideas concrete. There are essentially none in 230 pages apart from passing mentions of famous cases of medical error that added to public pressure, and a tantalizing but maddeningly nonspecific discussion of the atypically successful effort to radically improve the safety of anesthesia. Apparently anesthesiologists involved safety experts from outside medicine, avoided a focus on human error, turned safety into an engineering problem, and made concrete improvements that had a hugely positive impact on the number of adverse events for patients. Sounds fascinating! Alas, I'm just as much in the dark about what those improvements were as I was when I started reading this book. Apart from a vague mention of some unspecified improvements to anesthesia machines, there are no concrete descriptions whatsoever.

I understand that the authors were probably leery of giving too many specific examples of successful safety initiatives, since one of their core points is that safety is a mindset and philosophy rather than a replicable set of actions, and copying the actions of another field without understanding their underlying motivations or context within a larger system is doomed to failure. But you have to give the reader something, or the book starts feeling like a flurry of abstract assertions.
Much is made here of the drawbacks of a focus on human error, and the superiority of the safety analysis done in other fields that have moved beyond error-centric analysis (and in some cases have largely discarded the word "error" as inherently unhelpful and ambiguous). That leads naturally to showing an analysis of an adverse incident through an error lens and then through a more nuanced safety lens, making the differences concrete for the reader. It was maddening to me that the authors never did this.

This book was recommended to me as part of a discussion about safety and reliability in tech and the need to learn from safety practices in other fields. In that context, I didn't find it useful, although surprisingly that's because the thinking in medicine (at least as presented by these authors) seems behind the current thinking in distributed systems. The idea that human error is not a useful model for approaching reliability is standard in large tech companies, nearly all of which use blameless postmortems for exactly that reason. Tech, similar to medicine, does have a tendency to be insular and not look outside the field for good ideas, but the approach to large-scale reliability in tech seems to have avoided the other traps discussed here. (Security is another matter, but security is also adversarial, which creates different problems that I suspect require different tools.)

What I did find fascinating in this book, although not directly applicable to my own work, is the way in which a focus on human error becomes a justification for bureaucratic control and therefore a concentration of power in a managerial layer. If the assumption is that medical harm is primarily caused by humans making avoidable mistakes, and therefore the solution is to prevent humans from making mistakes through better training, discipline, or process, this creates organizations that are divided into those who make the rules and those who follow the rules. The long-term result is a practice of medicine in which a small number of experts decide the correct treatment for a given problem, and then all other practitioners are expected to precisely follow that treatment plan to avoid "errors." (The best distributed systems approaches may avoid this problem, but this failure mode seems nearly universal in technical support organizations.)

I was startled by how accurate that portrayal of medicine felt. My assumption prior to reading this book was that the modern experience of medicine as an assembly line with patients as widgets was caused by the pressure for higher "productivity" and thus shorter visit times, combined with (in the US) the distorting effects of our broken medical insurance system. After reading this book, I've added a misguided way of thinking about medical error and risk avoidance to that analysis.

One of the authors' points (which, as usual, I wish they'd made more concrete with a case study) is that the same thought process that lets a doctor make a correct diagnosis and find a working treatment is the thought process that may lead to an incorrect diagnosis or treatment. There is not a separable state of "mental error" that can be eliminated. Decision-making processes are more complicated and more integrated than that. If you try to prevent "errors" by eliminating flexibility, you also eliminate vital tools for successfully treating patients.
The authors are careful to point out that the prior state of medicine in which each doctor was a force to themselves and there was no role for patient safety as a discipline was also bad for safety. Reverting to the state of medicine before the advent of the scientific-bureaucratic error-avoiding culture is also not a solution.

But, rather at odds with other popular books about medicine, the authors are highly critical of safety changes focused on human error prevention, such as mandatory checklists. In their view, this is exactly the sort of attempt to blindly copy the machinery of safety in another field (in this case, air travel) without understanding the underlying purpose and system of which it's a part. I am not qualified to judge the sharp dispute over whether there is solid clinical evidence that checklists are helpful (these authors claim there is not; I know other books make different claims, and I suspect it may depend heavily on how the checklist is used). But I found the authors' argument that one has to design systems holistically for safety, not try to patch in safety later by turning certain tasks into rote processes and humans into machines, to be persuasive.

I'm not willing to recommend this book given how devoid it is of concrete examples. I was able to fill in some of that because of prior experience with the literature on site reliability engineering, but a reader who wasn't previously familiar with discussions of safety or reliability may find much of this book too abstract to be comprehensible. But I'm not sorry I read it. I hadn't previously thought about the power dynamics of a focus on error, and I think that will be a valuable observation to keep in mind.

Rating: 6 out of 10
total used free shared buff/cache available
Mem: 32717924 3101156 26950016 143608 2666752 29011928
Swap: 1000444 0 1000444
Most of these statistics come from /proc/meminfo, and are scaled and presented to the user. A good example of a simple stat is total, which is just the MemTotal row located in that file. For the rest of this post, I'll make the rows from /proc/meminfo have an amber background.
What is Free, and what is Used?
While you could say that the free value is also merely the MemFree row, this is where Linux memory statistics start to get odd. While that value is indeed what is found for MemFree and not a calculated field, it can be misleading.
Most people would assume that Free means free to use, with the implication that only this amount of memory is free to use and nothing more. That would also mean the used value is really used by something and nothing else can use it.
In the early days of free and Linux statistics in general, that was how it looked. Used is a calculated field (there is no MemUsed row) and was, initially, Total - Free.
The problem was, Used also included the Buffers and Cached values. This meant that it looked like Linux was using a lot of memory for something. If you read old messages from before 2002 that talk about excessive memory use, they are quite likely looking at the values printed by free.

The thing was, under memory pressure the kernel could release Buffers and Cached for use. Not all of that storage, but some of it, so it wasn't all really used. To counter this, free showed a row between Mem and Swap, with Used having Buffers and Cached removed and Free having the same values added:
total used free shared buffers cached
Mem: 32717924 6063648 26654276 0 313552 2234436
-/+ buffers/cache: 3515660 29202264
Swap: 1000444 0 1000444
Eventually the separate -/+ buffers/cache line was dropped and the adjustment moved into the main row: main_cache became Cached + Slab, while Used was calculated as Total - Free - main_cache - Buffers. This was very close to what the Used column in the -/+ line used to show.
What's on the Slab?
The next issue that came up was the use of slabs. At this point, main_cache was Cached + Slab, but Slab consists of reclaimable and unreclaimable components. One part of Slab can be used elsewhere if needed and the other cannot, but the procps tools treated them the same. The Used calculation should not subtract SUnreclaim from the Total, because that memory is actually being used.
So in 2015 main_cache was changed to be Cached + SReclaimable. This meant that Used memory was calculated as Total - Free - Cached - SReclaimable - Buffers.
Revenge of tmpfs and the return of Available
The tmpfs impact on Cached was still an issue. If you added a 10MB file into a tmpfs partition, then Free would reduce by 10MB and Cached would increase by 10MB, meaning Used stayed unchanged even though 10MB had gone somewhere.
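This is easy to demonstrate yourself. A minimal sketch, assuming a tmpfs is mounted at /dev/shm as on most Linux systems (the file name is arbitrary):

```sh
free -k | grep Mem:                                  # note the free and buff/cache columns
dd if=/dev/zero of=/dev/shm/testfile bs=1M count=10  # write 10MB into tmpfs
free -k | grep Mem:                                  # free drops ~10240 kB and buff/cache
                                                     # grows ~10240 kB, but the old-style
                                                     # used value would not move
rm /dev/shm/testfile                                 # give the memory back
```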
It was time to retire the complex calculation of Used. For procps 4.0.1 onwards, Used now means "not available". We take the Total memory and subtract the Available memory. This is not a perfect setup, but it is probably the best one we have, and testing is giving us much more sensible results. It's also easier for people to understand (take the total value you see in free, then subtract the available value).
What does that mean for main_cache which is part of the buff/cache value you see? As this value is no longer in the used memory calculation, it is less important. Should it also be reverted to simply Cached without the reclaimable Slabs?
The calculated fields
In summary, what this means for the calculated fields in procps, at least, is:

- Used: Total - Available, unless Available is not present, in which case it's Total - Free
- Buff/Cache: Cached + Reclaimable Slabs
- Swap Used: Total - Free (no change here)
- Available: the MemAvailable row from /proc/meminfo, which is straight from the kernel.
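As a rough illustration (not the actual procps source, which does this in C), the new Used calculation can be reproduced from the shell like this:

```sh
# Used = MemTotal - MemAvailable, falling back to MemTotal - MemFree
# on old kernels that lack the MemAvailable row.
awk '/^MemTotal:/     {t=$2}
     /^MemFree:/      {f=$2}
     /^MemAvailable:/ {a=$2}
     END {print (a ? t - a : t - f), "kB used"}' /proc/meminfo
```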
Publisher: Berkley
Copyright: September 2019
ISBN: 1-9848-0259-3
Format: Kindle
Pages: 372
xournal wasn't great, but now that I've tried it in xournalpp (more on this below), I think I will be enabling it in the future. The result on paper is also more consistent, but I trust my skills will improve over time.
Use case
The first use case I have for the tablet is grading papers. I've been asking my students to submit their papers via Moodle for a few semesters already, but until now, I was grading them using PDF comments. The experience wasn't great and was rather slow compared to grading physical copies.
I'm also a somewhat old-school teacher: I refuse to teach using slides. Death by PowerPoint is real. I write on the blackboard a lot and I find it much easier to prepare my notes by hand than by typing them, as the end result is closer to what I actually end up writing down on the board.
Writing notes by hand on sheets of paper is a chore too, especially when you revisit the same material regularly. Being able to handwrite digital notes gives me a lot more flexibility and it's been great.
So far, I have been using xournal to write notes and grade papers, and although it is OK, it has a bunch of quirks I dislike. I was waiting for xournalpp to be packaged in Debian, and it now is! I'm looking forward to using it next semester.
Towards a better computer monitor
I have also been feeling the age of my current computer monitor. I am currently
using an old 32" 1080p TV from LG and up until now, I had been able to
deal with the drawbacks. The colors are pretty bad and 1080p for such a large
display isn't great, but I got used to it.
What I really noticed when I started using my graphics tablet was the input
lag. It's bad enough that there's a clear jello effect when writing and it
eventually gives me a headache. It's so bad I usually prefer to work on my
laptop, which has a nicer but noticeably smaller panel.
I'm currently looking to replace this aging TV with something more modern. I have been holding out, since I would like to buy something that will last me another 10 years if possible. Sadly, 32" high refresh rate 4K monitors aren't exactly there yet and I haven't found anything matching my criteria. I would probably also need a new GPU, something that is not easy to come by these days.
Publisher: Penguin Books
Copyright: 2018
Printing: 2019
ISBN: 0-525-55880-2
Format: Kindle
Pages: 615
In Europe, the bullish CEOs of Deutsche Bank and Barclays claimed exceptional status because they avoided taking aid from their national governments. What the Fed data reveal is the hollowness of those boasts. The banks might have avoided state-sponsored recapitalization, but every major bank in the entire world was taking liquidity assistance on a grand scale from its local central bank, and either directly or indirectly by way of the swap lines from the Fed.

The emergency steps taken by Timothy Geithner in the Treasury Department were nearly as dramatic as those of the Federal Reserve. Without regard for borders, and pushing the boundary of their legal authority, they intervened massively in the world (not just the US) economy to save the banking and international finance system. And it worked.

One of the benefits of a good history is to turn stories about heroes and villains into more nuanced information about motives and philosophies. I came away from Sheila Bair's account of the crisis furious at Geithner's protection of banks from any meaningful consequences for their greed. Tooze's account, and analysis, agrees with Bair in many respects, but Bair was continuing a personal fight and Tooze has more space to put Geithner into context. That context tells an interesting story about the shape of political economics in the 21st century.

Tooze identifies Geithner as an institutionalist. His goal was to keep the system running, and he was acutely aware of what would happen if it failed. He therefore focused on the pragmatic and the practical: the financial system was about to collapse, he did whatever was necessary to keep it working, and that effort was successful. Fairness, fault, and morals were treated as irrelevant.

This becomes more obvious when contrasted with the eurozone crisis, which started with a Greek debt crisis in the wake of the recession triggered by the 2008 crisis. Greece is tiny by the standards of the European economy, so at first glance there is no obvious reason why its debt crisis should have perturbed the financial system. Under normal circumstances, its lenders should have been able to absorb such relatively modest losses. But the immediate aftermath of the 2008 crisis was not normal circumstances, particularly in Europe. The United States had moved aggressively to recapitalize its banks using the threat of compensation caps and government review of their decisions. The European Union had not; European countries had done very little, and their banks were still in a fragile state. Worse, the European Central Bank had sent signals that the market interpreted as guaranteeing the safety of all European sovereign debt equally, even though this was explicitly ruled out by the Lisbon Treaty. If Greece defaulted on its debt, not only would that be another shock to already-precarious banks, it would indicate to the market that all European debt was not equal and other countries may also be allowed to default. As the shape of the Greek crisis became clearer, the cost of borrowing for all of the economically weaker European countries began rising towards unsustainable levels.

In contrast to the approach taken by the United States government, though, Europe took a moralistic approach to the crisis. Jean-Claude Trichet, then president of the European Central Bank, held the absolute position that defaulting on or renegotiating the Greek debt was unthinkable and would not be permitted, even though there was no realistic possibility that Greece would be able to repay.
He also took a conservative hard line on the role of the ECB, arguing that it could not assist in this crisis. (Tooze is absolutely scathing towards Trichet, who comes off in this account as rigidly inflexible, volatile, and completely irrational.)

Germany's position, represented by Angela Merkel, was far more realistic: Greece's debt should be renegotiated and the creditors would have to accept losses. This is, in Tooze's account, clearly correct, and indeed is what eventually happened. But the problem with Merkel's position was the potential fallout. The German government was still in denial about the health of its own banks, and political opinion, particularly in Merkel's coalition, was strongly opposed to making German taxpayers responsible for other people's debts. Stopping the progression of a Greek default to a loss of confidence in other European countries would require backstopping European sovereign debt, and Merkel was not willing to support this.

Tooze is similarly scathing towards Merkel, but I'm not sure it's warranted by his own account. She seemed, even in his account, boxed in by domestic politics and the tight constraints of the European political structure.

Regardless, even after Trichet's term ended and he was replaced by the far more pragmatic Mario Draghi, Germany and Merkel continued to block effective action to relieve Greece's debt burden. As a result, the crisis lurched from inadequate stopgap to inadequate stopgap, forcing crippling austerity, deep depressions, and continued market instability while pretending unsustainable debt would magically become payable through sufficient tax increases and spending cuts. US officials such as Geithner, who put morals and arguably legality aside to do whatever was needed to save the system, were aghast.

One takeaway from this is that expansionary austerity is the single worst macroeconomic idea that anyone has ever had.
In the summer of 2012 [the IMF's] staff revisited the forecasts they had made in the spring of 2010 as the eurozone crisis began and discovered that they had systematically underestimated the negative impact of budget cuts. Whereas they had started the crisis believing that the multiplier was on average around 0.5, they now concluded that from 2010 forward it had been in excess of 1. This meant that cutting government spending by 1 euro, as the austerity programs demanded, would reduce economic activity by more than 1 euro. So the share of the state in economic activity actually increased rather than decreased, as the programs presupposed. It was a staggering admission. Bad economics and faulty empirical assumptions had led the IMF to advocate a policy that destroyed the economic prospects for a generation of young people in Southern Europe.

Another takeaway, though, is central to Tooze's point in the final section of the book: the institutionalists in the United States won the war on financial collapse via massive state interventions to support banks and the financial system, a model that Europe grudgingly had to follow when attempting to reject it caused vast suffering while still failing to stabilize the financial system. But both did so via actions that were profoundly and obviously unfair, and only questionably legal. Bankers suffered few consequences for their greed and systematic mismanagement, taking home their normal round of bonuses while millions of people lost their homes and unemployment rates for young men in some European countries exceeded 50%. In Europe, the troika's political pressure against Greece and Italy was profoundly anti-democratic. The financial elite achieved their goal of saving the financial system. It could have failed, that failure would have been catastrophic, and their actions are defensible on pragmatic grounds. But they completely abandoned the moral high ground in the process.

The political forces opposed to centrist neoliberalism attempted to step into that moral gap. On the Left, that came in the form of mass protest movements, Occupy Wall Street, Bernie Sanders, and parties such as Syriza in Greece. The Left, broadly, took the moral side of debtors, holding that the primary pain of the crisis should instead be borne by the wealthy creditors who were more able to absorb it. The Right, by contrast, in the form of the Tea Party movement inside the Republican Party in the United States and the nationalist parties in Europe, broadly blamed debtors for taking on excessive debt and focused their opposition on use of taxpayer dollars to bail out investment banks and other institutions of the rich. Tooze correctly points out that the Right's embrace of racist nationalism and incoherent demagoguery obscures the fact that their criticism of the elite center has real merit and is partly shared by the Left.

As Tooze sketches out, the elite centrist consensus held in most of Europe, beating back challenges from both the Left and the Right, although it faltered in the UK, Poland, and Hungary. In the United States, the Democratic Party similarly solidified around neoliberalism and saw off its challenges from the Left. The Republican Party, however, essentially abandoned the centrist position, embracing the Right. That left the Democratic Party as the sole remaining neoliberal institutionalist party, supplemented by a handful of embattled Republican centrists.
Wall Street and its money swung to the Democratic Party, but it was deeply unpopular on both the Left and the Right, and this shift may have hurt the Democrats more than it helped. The Democrats, by not abandoning the center, bore the brunt of the residual anger over the bank bailout and subsequent deep recession. Tooze sees in that part of the explanation for Trump's electoral victory over Hillary Clinton.

This review is already much too long, and I haven't even mentioned Tooze's clear explanation of the centrality of treasury bonds to world finances, or his discussions of Russia and Ukraine, China, or Brexit, all of which I thought were excellent. This is not only a comprehensive history of both of the crises and the international politics of the time period. It is also a thought-provoking look at the drastic interventions required to keep the supposed free market working, at who is left to suffer after those interventions, and at the political consequences of the choice to prioritize the stability of a deeply inequitable and unsafe financial system.

At least in the United States, there is now a major political party that is likely to oppose even mundane international financial institutions, let alone another major intervention. The neoliberal center is profoundly weakened. But nothing has been done to untangle the international financial system, and little has been done to reduce its risk. The world will go into the next financial challenge still suffering from a legitimacy crisis. Given the miserly, condescending, and dismissive treatment of the suffering general populace after moving heaven and earth to save the banking system, that legitimacy crisis is arguably justified, but an uncontrolled crash of the financial system is not likely to be any kinder to the average citizen than it is to the investment bankers.

Crashed is not the best-written book at a sentence-by-sentence level. Tooze's prose is choppy and a bit awkward, and his paragraphs occasionally wander away from a clear point. But the content is excellent and thought-provoking, filling in large sections of the crisis picture that I had not previously been aware of and making a persuasive argument for its continuing effects on current politics. Recommended if you're not tired of reading about financial crises.

Rating: 8 out of 10
Beethoven: A Life in Nine Pieces (2020) Laura Tunbridge

Whilst it might immediately present itself as a clickbait conceit, organising an overarching narrative around just nine compositions by Beethoven turns out to be an elegant way of saying something fresh about this grizzled old bear. Some of Beethoven's most famous compositions are naturally included in the nine (eg. the Eroica and the Hammerklavier piano sonata), but the book raises itself above conventional Beethoven fare when it highlights, for instance, his Septet, Op. 20, an early work that is virtually nobody's favourite Beethoven piece today. The insight here is that it was widely popular in its time, played again and again around Vienna for the rest of his life. No doubt many contemporary authors can relate to this inability to escape being artistically haunted by an earlier runaway success.

The easiest way to say something interesting about Beethoven in the twenty-first century is to talk about the myth of Beethoven instead. Or, as Tunbridge implies, perhaps that should really be 'Beethoven' in leaden quotation marks, given that so much of what we think we know about the man is a quasi-fictional construction. Take Anton Schindler, Beethoven's first biographer and occasional amanuensis, who destroyed and fabricated details about Beethoven's life, casting himself in a favourable light and exaggerating his influence with the composer. Only a few decades later, the idea of a 'heroic' German was to be politically useful as well; the Anglosphere often needs reminding that Germany did not exist as a nation-state prior to 1871, so it should be unsurprising to us that the late nineteenth century saw a determined attempt to create a uniquely 'German' culture ex nihilo. (And the less we say about Immortal Beloved the better, even though I treasure that film.)

Nevertheless, Tunbridge cuts through Beethoven's substantial legacy with a surgical precision that not only avoids feeling like it is settling a score, but also does so in a way that is unlikely to completely alienate anyone emotionally dedicated to some already-established idea of the man, or to bring forth the tediously predictable sentiment that Beethoven has 'gone woke'. With Alex Ross on the cult of Wagner, it seems that books about the 'myth of X' are somewhat in vogue right now. And this pattern within classical music might fit into some broader trend of deconstruction in popular non-fiction too, especially when we consider the numerous contemporary books on the long hangover of the Civil Rights era (Robin DiAngelo's White Fragility, etc.), the multifarious ghosts of Empire (Akala's Natives, Sathnam Sanghera's Empireland, etc.) or even the 'transmogrification' of George Orwell into myth. But regardless of its place in some wider canon, A Life in Nine Pieces is beautifully printed in hardback form (worth acquiring for that very reason alone), and it is one of the rare good books about classical music that can be recommended to both the connoisseur and the layperson alike.
Sea State (2021) Tabitha Lasley

In her mid-30s and jerking herself out of a terrible relationship, Tabitha Lasley left London and put all her savings into a six-month lease on a flat in a questionable neighbourhood in Aberdeen, Scotland. She left to make good on a lukewarm idea for a book about oil rigs and the kinds of men who work on them: "I wanted to see what men were like with no women around," she claims. The result is Sea State, a forthright examination of the life of North Sea oil riggers, and an unsparing portrayal of loneliness, masculinity, female desire and the decline of industry in Britain. (It might almost be said that Sea State is an update of a sort to George Orwell's visit to the mines in the North of England.)

As bracing as the North Sea air, Sea State spoke to me on multiple levels, but I found it additionally interesting to compare and contrast with Julian Barnes' The Man in the Red Coat (see below). Women writers are rarely thought to be using fiction for higher purposes: it is assumed that, unlike men, whatever women commit to paper is confessional without any hint of artfulness. Indeed, it seems to me that the reaction against the decades-old genre of autofiction only really took hold when it became the domain of millennial women. (By contrast, as a 75-year-old male writer with a firmly established reputation in the literary establishment, Julian Barnes is allowed wide latitude in what he does with his sources, and his writing can be imbued with supremely confident airs as a result.) Furthermore, women are rarely allowed metaphor or exaggeration for dramatic effect, and they certainly aren't permitted to emphasise darker parts in order to explore them... hence some of the transgressive gratification of reading Sea State.

Sea State is admittedly not a work of autofiction, but the sense that you are reading about an author writing a book is pleasantly unavoidable throughout. It frequently returns to the topic of oil workers who live multiple lives, and Lasley admits to living two lives herself: she may be in love but she's also on assignment, and a lot of the pleasure in this candid and remarkably accessible book lies in the way these states become slowly inseparable.
Twilight of Democracy (2020) Anne Applebaum

For the uninitiated, Anne Applebaum is a staff writer for The Atlantic magazine who won a Pulitzer Prize for her 2004 book on the Soviet Gulag system. Her latest book, however, Twilight of Democracy, is part memoir and part political analysis, and discusses democratic decline and the rise of right-wing populism. This, according to Applebaum, displays distinctly authoritarian tendencies, and who am I to disagree? Applebaum does this through three main case studies (Poland, the United Kingdom and the United States), but the book touches on Hungary as well.

The strongest feature of this engaging book is that Applebaum's analysis focuses on the intellectual classes and how they provide significant justification for a descent into authoritarianism. This is always an important point to be remembered, especially as much of the folk understanding of the rise of authoritarian regimes tends to place exaggerated responsibility on the ordinary and everyday citizen: the blame placed on the working class in the Weimar Republic or the scorn heaped upon the 'white trash' of the contemporary Rust Belt, for example. Applebaum is uniquely poised to discuss these intellectuals because, well, she actually knows a lot of them personally. Or at least, she used to know them. Indeed, the narrative of the book revolves around two parties she hosted, both in the same house in northwest Poland. The first party, on 31 December 1999, was attended by friends from around the Western world, but most of the guests were Poles from the broad anti-communist alliance. They all agreed about democracy, the rule of law and the route to prosperity whilst toasting in the new millennium. (I found it amusing to realise that War and Peace also starts with a party.) But nearly two decades later, many of the attendees have ended up as supporters of the problematic 'Law and Justice' party which currently governs the country. Applebaum would now cross the road to avoid them, and they would do the same to her, let alone behave themselves at a cordial reception.

The result of this autobiographical detail is that, by personalising the argument, Applebaum avoids the trap of making too much of a high-minded abstract argument for 'democracy', and additionally makes her book compellingly spicy too. Yet the strongest part of this book is also its weakest. By individualising the argument, it often feels that Applebaum is settling a number of personal scores. She might be very well justified in doing this, but at times it feels like the reader has walked in halfway through some personal argument and is being asked to judge who is in the right. Furthermore, Applebaum's account of contemporary British politics sometimes deviates into the cartoonish: nothing was egregiously incorrect in any of her summations, but her explanation of the Brexit referendum result didn't read as completely sound. Nevertheless, this is a lively and entertaining book that can be read with profit, even if you disagree with significant portions of it, and its highly personal approach makes it a refreshing change from similar contemporary political analysis (eg. David Runciman's How Democracy Ends) which reaches for a more 'objective' line.
The Man in the Red Coat (2019) Julian Barnes

As rich as the eponymous red coat that adorns its cover, Julian Barnes' quasi-biography of French gynaecologist Samuel-Jean Pozzi (1846-1918) is at once illuminating, perplexing and downright hilarious. Yet even that short description is rather misleading, for this book evades classification in all manner of ways. For instance, it is unclear that, with the biographer's narrative voice so obviously manifest, it is even a biography in the useful sense of the word. After all, doesn't the implied pact between author and reader require the biographer to at least pretend that they are hiding from the reader? Perhaps this is just what happens when an author of very fine fiction turns his hand to non-fiction history, and, if so, it represents a deeper incursion into enemy territory after his 1984 metafictional Flaubert's Parrot. Indeed, upon encountering an intriguing mystery in Pozzi's life crying out for a solution, Barnes baldly turns to the reader, winks and states: "These matters could, of course, be solved in a novel." Well, quite. Perhaps Barnes' broader point is that, given it's impossible for the author to completely melt into air, why not simply put down your cards and have a bit of fun whilst you're at it? If there's any biography that makes the case for a rambling and lightly polemical treatment, then it is this one.

Speaking of having fun, however, two qualities you do not expect in a typical biography are simply how witty it can be, as well as having something of the whiff of the thriller about it. A bullet might be mentioned in an early chapter, but given that the name and history of Monsieur Pozzi are not widely known, one is unlikely to learn how he lived his final years until the closing chapters. (Or what happened to that turtle.) Humour is primarily incorporated into the book in two main ways: first, by explicitly citing the various wits of the day ("What is a vice? Merely a taste you don't share.", etc.), but perhaps more powerful are the gentle ironies, bons mots and observations in Barnes' entirely unflappable prose style, along with the satire implicit in him writing this moreish pseudo-biography to begin with. The opening page, with its steadfast refusal to even choose where to begin, is somewhat characteristic of Barnes' method, so if you don't enjoy the first few pages then you are unlikely to like the rest. (Indeed, the whole enterprise may be something of an acquired taste. Like Campari.) For me, though, I was left wryly grinning and often couldn't wait to turn the page. Indeed, at times it reminded me of being at a dinner party with an extremely charming guest at the very peak of his form as a wit and raconteur, delighting the party with his rambling yet well-informed discourse on his topic du jour. A significant book, and a book of significance.
Publisher: Alfred A. Knopf
Copyright: 2021
ISBN: 0-593-32010-7
Format: Kindle
Pages: 260
You were, quite literally, doing your job from home. But you weren't working from home. You were laboring in confinement and under duress. Others have described it as living at work. You were frantically tapping out an email while trying to make lunch and supervise distance learning. You were stuck alone in a cramped apartment for weeks, unable to see friends or family, exhausted, and managing a level of stress you didn't know was possible. Work became life, and life became work. You weren't thriving. You were surviving.

The stated goal of this book is to reclaim the concept of working from home, not only from the pandemic, but also from the boundary-destroying metastasis of work into non-work life. It does work towards that goal, but the description of what would be required for working from home to live up to its promise becomes a sweeping critique of the organization and conception of work, leaving it nearly as applicable to those who continue working from an office. Turns out that the main problem with working from home is the work part, not the "from home" part.

This was a fascinating book to read in conjunction with A World Without Email. Warzel and Petersen do the structural and political analysis that I sometimes wish Newport would do more of, but as a result offer less concrete advice. Both, however, have similar diagnoses of the core problems of the sort of modern office work that could be done from home: it's poorly organized, poorly managed, and desperately inefficient. Rather than attempting to fix those problems, which is difficult, structural, and requires thought and institutional cooperation, we're compensating by working more. This both doesn't work and isn't sustainable.

Newport has a background in productivity books and a love of systems and protocols, so his focus in A World Without Email is on building better systems of communication and organization of work. Warzel and Petersen come from a background of reporting and cultural critique, so they put more focus on power imbalances and power-serving myths about the American dream. Where Newport sees an easy-to-deploy ad hoc work style that isn't fit for purpose, Warzel and Petersen are more willing to point out intentional exploitation of workers in the guise of flexibility. But they arrive at some similar conclusions. The way office work is organized is not leading to more productivity. Tools like Slack encourage the public performance of apparent productivity at the cost of the attention and focus required to do meaningful work. And the process is making us miserable.

Out of Office is, in part, a discussion of what would be required to do better work with less stress, but it also shares a goal with Newport and some (but not most) corners of productivity writing: spend less time and energy on work. The goal of Out of Office is not to get more work done. It's to work more efficiently and sustainably, and thus work less. To reclaim the promise of flexibility so that it benefits the employee and not the employer. To recognize, in the authors' words, that the office can be a bully, locking people into commute schedules and unnatural work patterns, although it also provides valuable moments of spontaneous human connection.
Out of Office tries to envision a style of work that includes the office sometimes, home sometimes, time during the day to attend to personal chores or simply to take a mental break from an unnatural eight hours (or more) of continuous focus, universal design, real worker-centric flexibility, and an end to the constant productivity ratchet where faster work simply means more work for the same pay. That's a lot of topics for a short book, and structurally this is a grab bag. Some sections will land and some won't. Loom's video messages sound like a nightmare to me, and I rolled my eyes heavily at the VR boosterism, reluctant as it may be. The section on DEI (diversity, equity, and inclusion) was a valiant effort that at least gestures towards the dismal track record of most such efforts, but still left me unconvinced that anyone knows how to improve diversity in an existing organization without far more brute-force approaches than anyone with power is usually willing to consider. But there's enough here, and the authors move through topics quickly enough, that a section that isn't working for you will soon be over. And some of the sections that do work are great. For example, the whole discussion of management.
Many of these companies view middle management as bloat, waste, what David Graeber would call a "bullshit job." But that's because bad management is a waste; you're paying someone more money to essentially annoy everyone around them. And the more people experience that sort of bad management, and think of it as "just the way it is," the less they're going to value management in general.

I admit to a lot of confirmation bias here, since I've been ranting about this for years, but management must be the most widespread professional job for which we ignore both training and capability and assume that anyone who can do any type of useful work can also manage people doing that work. It's simply not true, it creates workplaces full of horrible management, and that in turn creates a deep and unhelpful cynicism about all management.

There is still a tendency on the left to frame this problem in terms of class struggle, on the reasonable grounds that for decades under "scientific management" of manufacturing that's what it was. Managers were there to overwork workers and extract more profits for the owners, and labor unions were there to fight back against managers. But while some of this does happen in the sort of office work this book is focused on, I think Warzel and Petersen correctly point to a different cause.
"The reason she was underpaid on the team was not because her boss was cackling in the corner. It was because nobody told the boss it was their responsibility to look at the fucking spreadsheet."We don't train managers, we have no clear expectations for what managers should do, we don't meaningfully measure their performance, we accept a high-overhead and high-chaos workstyle based on ad hoc one-to-one communication that de-emphasizes management, and many managers have never seen good management and therefore have no idea what they're supposed to be doing. The management problem for many office workers is less malicious management than incompetent management, or simply no effective management at all apart from an occasional reorg and a complicated and mind-numbing annual review form. The last section of this book (apart from concluding letters to bosses and workers) is on community, and more specifically on extracting time and energy from work (via the roadmap in previous chapters) and instead investing it in the people around you. Much ink has been spilled about the collapse of American civic life, about how we went from a nation of joiners to a nation of isolated individual workers with weak and failing community institutions. Warzel and Petersen correctly lay some blame for this at the foot of work, and see the reorganization of work and an increase in work from home (and thus a decrease in commutes) as an opportunity to reverse that trend. David Brooks recently filled in for Ezra Klein on his podcast and talked with University of Chicago professor Leon Kass, which I listened to shortly after reading this book. In one segment, they talked about marriage and complained about the decline in marriage rates. They were looking for causes in people's moral upbringing, in their life priorities, in the lack of aspiration for permanence in kids these days, and in any other personal or moral failing that would allow them to be smugly judgmental. It was a truly remarkable thing to witness. Neither man at any point in the conversation mentioned either money or time. Back in the world most Americans live in, real wages have been stagnant for decades, student loan debt is skyrocketing as people desperately try to keep up with the ever-shifting requirements for a halfway-decent job, and work has expanded to fill all hours of the day, even for people who don't have to work multiple jobs to make ends meet. Employers have fully embraced a "flexible" workforce via layoffs, micro-optimizing work scheduling, eliminating benefits, relying on contract and gig labor, and embracing exceptional levels of employee turnover. The American worker has far less of money, time, and stability, three important foundations for marriage and family as well as participation in most other civic institutions. People like Brooks and Kass stubbornly cling to their feelings of moral superiority instead of seeing a resource crisis. Work has stolen the resources that people previously put into those other areas of their life. And it's not even using those resources effectively. That's, in a way, a restatement of the topic of this book. Our current way of organizing work is not sustainable, healthy, or wise. Working from home may be part of a strategy for changing it. The pandemic has already heavily disrupted work, and some of those changes, including increased working from home, seem likely to stick. That provides a narrow opportunity to renegotiate our arrangement with work and try to make those changes stick. 
I largely agree with the analysis, but I'm pessimistic. I think the authors are as well. We're very bad at social change, and there will be immense pressure for everything to go "back to normal." Those in the best bargaining position to renegotiate work for themselves are not in the habit of sharing that renegotiation with anyone else. But I'm somewhat heartened by how much public discussion there currently is about a more fundamental renegotiation of the rules of office work. I'm also reminded of a deceptively profound aphorism from economist Herbert Stein: "If something cannot go on forever, it will stop."

This book is a bit uneven and is more of a collection of related thoughts than a cohesive argument, but if you are hungry for more worker-centric analyses of the dynamics of office work (inside or outside the office), I think it's worth reading.

Rating: 7 out of 10
TL;DR: Science simply does not support binary sexes or binary genders. Truth is a bit more complicated.

There is certainty and there are binary answers in mathematics. Things get less definitive in physics, certainly as soon as quantum is broached. Processes become more of an equilibrium between states in chemistry, never wholly one or the other. Yes, there is the oddity of absolute zero, but no experiment has yet achieved that fully. It is accurate to describe physics as a development of applied mathematics and to view chemistry as applied physics. Biology, at the biochemical level, is applied chemistry. The sciences build on each other, "on the shoulders of giants", but at each level, some certainty is lost, some amount of uncertainty is expanded, and measurements become probabilities, proportions and percentages.

Biology is dependent on biochemistry: chemistry is how a biological change results in a different organism. Physics is how that chemical change occurs; temperature, pressure and physical states are inherent to all chemical changes. Outside laboratory constraints, few chemical reactions, especially in organic chemistry, produce one and only one result from two or more known reagents.

In biology, everyone is familiar with genetic mutations, but a genetic mutation only happens because a biochemical reaction (hydrogen bonding of nucleobases) does not always produce the expected result. Every cell division, every viral infection, there is a finite probability that a change will occur. It might be a small number, but it is never zero and can never be dismissed. This is obvious in the current Covid pandemic: genetic mutations result in new variants. Some variants are inviable, some variants produce no net change in the way that the viral particles infect adjacent cells. Sometimes, a mutation happens that changes everything. These mutations are not mistakes; they are simply changes with undetermined outcomes. Genetic changes are the foundation of biodiversity, and variety is what allows lifeforms of all kinds to survive changes in environmental factors and/or changes in prevalent diseases.

It is precisely the same in humans, particularly in one of the principal spheres of human life that involves replicating genetic material: the creation of gametes for sexual reproduction. Every single time any DNA is copied, there is a finite chance that a different base will be put in place compared to the original. Copying genetic material is therefore non-binary. Given precisely the same initial conditions, the result is not always predictable, and the range of how the results vary from one to another increases with every iteration.

Let me stress that: at the molecular level, no genetic operation in any biological lifeform has a truly binary result. Repeat that operation sufficiently often and an unexpected result WILL inevitably occur. It is a mathematical certainty that genetic changes will arise by attempting precisely the same genetic operation enough times (a claim made precise in the short calculation at the end of this post). Genetic changes are fundamental to how lifeforms survive changing conditions. Life would likely have died out a long time ago on this planet if every genetic operation was perfect. Diversity is life. Similarity leads to extinction.

Viral load is interesting at this point. Someone can be infected with a virus, including coronavirus, by encountering a small number of viral particles. For some viruses, it may be a few hundred; other viruses may need a few thousand particles to infect a vulnerable host.
But here's the thing: for that host to be at risk of infecting another host, the virus needs the host to produce billions upon billions of copies of the virus by taking over the genetic machinery within a huge number of cells in the host. This, as is accepted with Covid, happens before the virus has been copied enough times to produce symptoms in the host. Before those symptoms become serious, billions more copies will be made. The numbers become unimaginable, and that is within a single host, let alone the 265 million (and counting) hosts in the current Covid-19 pandemic. It is also no wonder that viral infections cause tiredness: the infection is diverting huge resources to propagating itself, before even considering the activity of the immune system. It is idiocy of the highest order to expect all those copies to be identical. The rise of variants is inevitable, indeed essential, in all spheres of biology.

A single viral particle is absolutely no threat of any kind; it must first get inside and then copy the genetic information in a host cell. This is where the complexity lies in the definition of life itself. A virus can be considered a lifeform, but it is only able to reproduce using another, more complex, lifeform. In truth, a viral particle does not and cannot mutate. The infected host mutates the virus. The longer it takes that host to clear the infection, the more mutations that host will create and then potentially spread to others.

Now apply this to the creation of gametes in humans. With seven billion humans, the amount of copying of genetic material is not as large as in the pandemic, but it is still easy for everyone to understand that children do not merely combine the DNA of both parents. Changes happen. Human sexual reproduction is not as simple as 1 + 1 = 2. Sometimes the copying of the genetic material produces an unexpected result. Sexual reproduction itself is non-binary. Sexual reproduction is not easy or simple for lifeforms to adopt; the diversity which results from its non-binary operations is exactly why so many lifeforms invest so much energy in reproducing in this way.

Whilst many genetic changes in humans will be benign or beneficial, I'd like to take an example of a genetic disorder that results from the non-binary nature of sex. Humans can be born with the XY karyotype: at a genetic level, the individual has the same combination of chromosomes as another XY individual, but there are changes within the genes on those chromosomes. We accept this; some children of blonde parents do not have blonde hair, and so on. There are also genetic changes where the outcome of an XY karyotype is not binary. Some people, who at a genetic level would be almost identical to another person who is genetically male, have a genetic mutation which makes it impossible for the cells of that individual to respond to androgens (testosterone). (See androgen insensitivity syndrome.) Genetically, that individual has an X and a Y chromosome, just like many other individuals. However, due to a change in how the genes on those chromosomes were copied, that individual is biologically incapable of developing the secondary sexual characteristics of a male. At a genetic level, the individual has the XY karyotype of a male. At the physical level, the individual has all the sexual characteristics of a female and none of the sexual characteristics of a male. The gender of that individual is not binary.
Treatment is centred on supporting the individual and minimising some risks from the inactive genes on the Y chromosome. Human sexual reproduction is non-binary. The results of any sexual reproduction in humans will not always produce the binary option of male or female. It is a lie to claim that human gender is binary. The science is in plain view and cannot be ignored. Identifying as non-binary is not a "cop out"; it can be a biological, genetic, scientific fact.

Human sexuality and gender are malleable. Where genetic changes result in symptoms, these can be ameliorated by treatment with human sex hormones, like oestrogen and testosterone. There are valid medical uses for anabolic steroids and hormone replacement therapies to help individuals who, at a genetic level, have non-binary gender. These treatments can help align the physical outer signs with the personality and identity of the individual, whether with or without surgery. It is unacceptable to abandon such people to suffer lifelong discrimination and harassment by imposing a binary definition that has no basis in science.

When a human being has an XY karyotype, that human being is not necessarily male. That individual will be on a spectrum from female (left unaffected by sex hormones in the womb, the foetus will be female, even with an X and a Y chromosome) to various degrees of male. So, at a genetic, biological level, it is a scientific fact that human beings do not have binary gender. There is no evidence that this is new to the modern era, no scientific basis for thinking that copying of genetic material was somehow perfectly reliable in earlier history, and no reason to think that such mutations are specific to Homo sapiens. Changes in genetic material provide the diversity to fight infections and adapt to changing environmental factors. Species have gone, and will continue to go, extinct if this diversity is absent.

With that out of the way, it is no longer a stretch to encompass other aspects of human non-binary genders beyond the known genetic syndromes based on changes in the XY karyotype. Science has not uncovered all of the ways that genes affect personality, behaviour, or identity. How other, less studied, genetic changes affect the much more subtle human facets, especially anything to do with consciousness, identity, personality, sexuality and behaviour, is guesswork. All of these facets can be, and likely are being, affected by genetic factors as well as environmental factors in an endless range of permutations. Personality traits are a beautiful and largely unknowable blend of genes and environment. Genetic information has a finite probability of changing at each and every iteration. Environmental factors are more akin to chaos theory. The idea that the results will fit into binary constructs is laughable.

Human society puts huge emphasis on societal norms. Individuals who do not fit into those norms suffer discrimination. The norms themselves have evolved over time as a response to various influences on human civilisation, but most are not based on science. It is up to all humans in a society to call out discrimination, to call for changes in the accepted norms, and to support those who are marginalised. It is a precarious balance, one that humans rarely get right, but it must be based on an acceptance that variation is the natural state. Artificial constraints, like binary genders, must be dismantled because human beings and human sexual reproduction are not binary. To those who think, "well, it is for 99%", think again about Covid.
99% (or closer to 98%) of infected humans recover from Covid without notable after-effects. That has still crippled the nations of the globe and humbled all those who tried to deny it. Five million human beings are dead because "most infected people recover". Just because something only affects a proportion of human beings does not invalidate the suffering of those humans or the discrimination that those humans will face. Societal norms are not necessarily correct. Religious and other influences typically obscure and ignore scientific fact and undermine human kindness.

The scientific truth of life on this planet is that gender is not binary. The more complex the lifeform, the more factors will affect where on the spectrum any one individual will appear. Just because we do not yet fully understand how genes affect human personality and sexuality does not invalidate the science that variation is the natural order.

My previous blog about diversity is not just about male vs female, one nationality vs another, or one ethnicity compared to another. Diversity is diverse. Diversity requires accepting that every facet of humanity is subject to variation. That leads to tension at times; it is inevitable. Tension against societal norms, tension against discrimination, tension around those individuals who would abuse the tolerance of others for their own gratification or from their own ignorance. None of us is perfect, none of us has any of this fully sorted, and all of us will make mistakes. Personally, I try to respect those around me. I will use whatever pronouns and other conventions the person requests, from their perspective and not mine. To do otherwise is to deny the natural order and to deny the science. Celebrate all diversity; it is the very stuff of life.
launchpadlib
, which were ported
years ago). As such, we weren't trying to do this with the internet having Strong Opinions at us. We were doing this because it was obviously the only long-term-maintainable path forward, and in more recent times because some of our library dependencies were starting to drop support for Python 2 and so it was obviously going to become a practical problem for us sooner or later; but if we'd just stayed on Python 2 forever then fundamentally hardly anyone else would really have cared directly, only maybe about some indirect consequences of that. I don't follow Mercurial development so I may be entirely off-base, but if other people were yelling at me about how late my project was to finish its port, that in itself would make me feel more negatively about the project even if I thought it was a good idea. Having most of the pressure come from ourselves rather than from outside meant that wasn't an issue for us.
I'm somewhat inclined to think of the process as an extreme version of paying down technical debt. Moving from Python 2.7 to 3.5, as we just did, means skipping over multiple language versions in one go, and if similar changes had been made more gradually it would probably have felt a lot more like the typical dependency update treadmill. I appreciate why not everyone might want to think of it this way: maybe this is just my own rationalization.
Reflections on porting to Python 3
I'm not going to defend the Python 3 migration process; it was pretty rough in a lot of ways. Nor am I going to spend much effort relitigating it here, as it's already been done to death elsewhere, and as I understand it the core Python developers have got the message loud and clear by now. At a bare minimum, a lot of valuable time was lost early in Python 3's lifetime hanging on to flag-day-type porting strategies that were impractical for large projects, when it should have been providing for "bilingual" strategies (code that runs in both Python 2 and 3 for a transitional period), which is where most libraries and most large migrations ended up in practice. For instance, the early advice to library maintainers to maintain two parallel versions or perhaps translate dynamically with 2to3 was entirely impractical in most non-trivial cases and wasn't what most people ended up doing, and yet the idea that 2to3 is all you need still floats around Stack Overflow and the like as a result. (These days, I would probably point people towards something more like Eevee's porting FAQ as somewhere to start.)
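To make the distinction concrete, here is a minimal sketch (my own illustration, not Launchpad code) of the bilingual style that most large migrations converged on: a single codebase that runs under both interpreters, typically with six papering over the differences.

    # Runs unchanged on Python 2.7 and Python 3.x; six.text_type and
    # six.binary_type abstract over the unicode/str and str/bytes renames.
    from __future__ import absolute_import, print_function

    import six

    def describe(value):
        if isinstance(value, six.text_type):      # unicode on 2, str on 3
            kind = 'text'
        elif isinstance(value, six.binary_type):  # str on 2, bytes on 3
            kind = 'bytes'
        else:
            kind = type(value).__name__
        print('%r is %s' % (value, kind))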
There are various fairly straightforward things that people often suggest could have been done to smooth the path, and I largely agree: not removing the u'' string prefix only to put it back in 3.3, fewer gratuitous compatibility breaks in the name of tidiness, and so on. But if I had a time machine, the number one thing I would ask to have been done differently would be introducing type annotations in Python 2 before Python 3 branched off. It's true that it's technically possible to do type annotations in Python 2, but the fact that it's a different syntax that would have to be fixed later is offputting, and in practice it wasn't widely used in Python 2 code. To make a significant difference to the ease of porting, annotations would need to have been introduced early enough that lots of Python 2 library code used them, so that porting code didn't have to be quite so much of an exercise of manually figuring out the exact nature of string types from context.
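For illustration, here is what the two spellings look like; the function is hypothetical, but the comment form is the standard PEP 484 type-comment syntax that Python 2 was limited to.

    from typing import Optional

    # Python 2-compatible type comment: valid syntax on both interpreters,
    # but it has to be rewritten as real annotations sooner or later.
    def guess_encoding(data, hint=None):
        # type: (bytes, Optional[str]) -> str
        return hint or 'utf-8'

    # The Python 3 spelling that the comment form eventually has to become.
    def guess_encoding3(data: bytes, hint: Optional[str] = None) -> str:
        return hint or 'utf-8'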
Launchpad is a complex piece of software that interacts with multiple domains: for example, it deals with a database, HTTP, web page rendering, Debian-format archive publishing, and multiple revision control systems, and there's often overlap between domains. Each of these tends to imply different kinds of string handling. Web page rendering is normally done mainly in Unicode, converting to bytes as late as possible; revision control systems normally want to spend most of their time working with bytes, although the exact details vary; HTTP is of course bytes on the wire, but Python's WSGI interface has some string type subtleties.
In practice I found myself thinking about at least four string-like types (that is, things that in a language with a stricter type system I might well want to define as distinct types and restrict conversion between them): bytes, text, ordinary native strings (str in either language, encoded to UTF-8 in Python 2), and native strings with WSGI's encoding rules. Some of these are emergent properties of writing in the intersection of Python 2 and 3, which is effectively a specialized language of its own, without coherent official documentation: its users must intuit its behaviour by comparing multiple sources of information or by referring to unofficial porting guides. Not a very satisfactory situation. Fortunately much of the complexity collapses once it becomes possible to write solely in Python 3.
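As a rough sketch of those four types (my own illustration, not Launchpad's helpers; the six.ensure_* functions are real, wsgi_native_string is a hypothetical name):

    import six

    def to_bytes(value):
        return six.ensure_binary(value, 'utf-8')   # type 1: bytes

    def to_text(value):
        return six.ensure_text(value, 'utf-8')     # type 2: text

    def to_native(value):
        # Type 3: the native str type in either language, UTF-8-encoded
        # bytes on Python 2 and text on Python 3.
        return six.ensure_str(value, 'utf-8')

    def wsgi_native_string(value):
        # Type 4: PEP 3333 native strings, restricted to latin-1-encodable
        # code points so each character stands for one byte on the wire.
        if isinstance(value, bytes):
            value = value.decode('latin-1')
        return value if str is not bytes else value.encode('latin-1')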
Some of the difficulties we ran into are not ones that are typically thought of as Python 2-to-3 porting issues, because they were changed later in Python 3's development process. For instance, the email module was substantially improved in around the 3.2/3.3 timeframe to handle Python 3's bytes/text model more correctly, and since Launchpad sends quite a few different kinds of email messages and has some quite picky tests for exactly what it emits, this entailed a lot of work in our email sending code and in our test suite to account for that. (It took me a while to work out whether we should be treating raw email messages as bytes or as text; bytes turned out to work best.) 3.4 made some tweaks to the implementation of quoted-printable encoding that broke a number of our tests in ways that took some effort to fix, because the tests needed to work on both 2.7 and 3.5. The list goes on. I got quite proficient at digging through Python's git history to figure out when and why some particular bit of behaviour had changed.
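The bytes-first approach for raw messages looks roughly like this (the email calls are the standard library API; the policy choice and helper are just a sketch):

    import email
    from email.policy import compat32

    def reparse(raw_bytes):
        # Parse and re-serialize as bytes so the wire format survives the
        # round trip; message_from_bytes exists from Python 3.2 onwards.
        msg = email.message_from_bytes(raw_bytes, policy=compat32)
        return msg.as_bytes()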
One of the thorniest problems was parsing HTTP form data. We mainly rely on zope.publisher for this, which in turn relied on cgi.FieldStorage; but cgi.FieldStorage is badly broken in some situations on Python 3. Even if that bug were fixed in a more recent version of Python, we can't easily use anything newer than 3.5 for the first stage of our port due to the version of the base OS we're currently running, so it wouldn't help much. In the end I fixed some minor issues in the multipart module (and was kindly given co-maintenance of it) and converted zope.publisher to use it. Although this took a while to sort out, it seems to have gone very well.
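For flavour, the multipart module's WSGI-level entry point looks something like this (parse_form_data is its real API; the surrounding handler is illustrative only):

    from multipart import parse_form_data

    def handle_request(environ):
        # parse_form_data consumes wsgi.input and returns two MultiDicts:
        # decoded text fields and uploaded file parts.
        forms, files = parse_form_data(environ)
        comment = forms.get('comment')
        attachment = files.get('attachment')
        return comment, attachment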
A couple of other interesting late-arriving issues were around pickle. For most things we normally prefer safer formats such as JSON, but there are a few cases where we use pickle, particularly for our session databases. One of my colleagues pointed out that I needed to remember to tell pickle to stick to protocol 2, so that we'd be able to switch back and forward between Python 2 and 3 for a while; quite right, and we later ran into a similar problem with marshal too. A more surprising problem was that datetime.datetime objects pickled on Python 2 require special care when unpickling on Python 3; rather than the approach that ended up being implemented and documented for Python 3.6, though, I preferred a custom unpickler, both so that things would work on Python 3.5 and so that I wouldn't have to risk affecting the decoding of other pickled strings in the session database.
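The protocol-pinning side of this is simple enough to sketch (the helpers are illustrative; our actual datetime fix was a custom unpickler, which this does not reproduce):

    import pickle

    def dump_session(value):
        # Protocol 2 is the highest protocol Python 2 can read, so pin it
        # while both interpreters share the session database.
        return pickle.dumps(value, protocol=2)

    def load_session(data):
        # On Python 3, Python 2 str payloads need a decoding policy;
        # encoding='bytes' is the standard knob for that.
        return pickle.loads(data, encoding='bytes')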
General lessons
Writing this over a year after Python 2's end-of-life date, and certainly nowhere near the leading edge of Python 3 porting work, it's perhaps more useful to look at this in terms of the lessons it has for other large technical debt projects.
I mentioned in my previous article that I used the approach of an enormous and frequently-rebased git branch as a working area for the port, committing often and sometimes combining and extracting commits for review once they seemed to be ready. A port of this scale would have been entirely intractable without a tool of similar power to git rebase, so I'm very glad that we finished migrating to git in 2019. I relied on this right up to the end of the port, and it also allowed for quick assessments of how much more there was to land. git worktree was also helpful, in that I could easily maintain working trees built for each of Python 2 and 3 for comparison.
As is usual for most multi-developer projects, all changes to Launchpad need to go through code review, although we sometimes make exceptions for very simple and obvious changes that can be self-reviewed. Since I knew from the outset that this was going to generate a lot of changes for review, I structured my work to try to make it as easy as possible for my colleagues to review. This generally involved keeping most changes to a somewhat manageable size of 800 lines or less (although this wasn't always possible), and arranging commits mainly according to the kind of change they made rather than their location. For example, when I needed to fix issues with / in Python 3 being true division rather than floor division, I did so in one commit across the various places where it mattered and took care not to mix it with other unrelated changes. This is good practice for nearly any kind of development, but it was especially important here since it allowed reviewers to consider a clear explanation of what I was doing in the commit message and then skim-read the rest of it much more quickly.
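The division change itself fits in a couple of lines:

    # On Python 3 (and on Python 2 with the future import), / is true
    # division, so integer code that relied on truncation needs //.
    from __future__ import division

    seconds = 90
    minutes = seconds // 60   # 1 on both versions (floor division)
    ratio = seconds / 60      # 1.5 on both versions with the future import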
It was vital to keep the codebase in a working state at all times, and deploy to production reasonably often: this way, if something went wrong, the amount of code we had to debug to figure out what had happened was always tractable. (Although I can't seem to find it now to link to it, I saw an account a while back of a company that had taken a flag-day approach instead with a large codebase. It seemed to work for them, but I'm certain we couldn't have made it work for Launchpad.)
I can't speak too highly of Launchpad's test suite, much of which originated before my time. Without a great deal of extensive coverage of all sorts of interesting edge cases at both the unit and functional level, and a corresponding culture of maintaining that test suite well when making new changes, it would have been impossible to be anything like as confident of the port as we were.
As part of the porting work, we split out a couple of substantial chunks of
the Launchpad codebase that could easily be decoupled from the core: its
Mailman integration and its code import
worker. Both of these had substantial
dependencies with complex requirements for porting to Python 3, and
arranging to be able to do these separately on their own schedule was
absolutely worth it. Like disentangling balls of wool, any opportunity you
can take to make things less tightly-coupled is probably going to make it
easier to disentangle the rest. (I can see a tractable way forward to
porting the code import worker, so we may well get that done soon. Our
Mailman integration will need to be rewritten, though, since it currently
depends on the Python-2-only Mailman 2, and Mailman 3 has a different architecture.)
Python lessons
Our database layer was already in pretty good shape for a port, since at least the modern bits of its table modelling interface were already strict about using Unicode for text columns. If you have any kind of pervasive low-level framework like this, then making it be pedantic at you in advance of a Python 3 port will probably incur much less swearing in the long run, as you won't be trying to deal with quite so many bytes/text issues at the same time as everything else.
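A hypothetical sketch of what "pedantic in advance" can look like (not Launchpad's actual Storm-based code): a descriptor that rejects byte strings on assignment, so bytes/text confusion surfaces long before the port.

    class UnicodeColumn(object):
        def __init__(self, name):
            self.name = '_' + name

        def __get__(self, obj, owner):
            if obj is None:
                return self
            return getattr(obj, self.name, None)

        def __set__(self, obj, value):
            # Fail loudly on bytes now, rather than mysteriously after
            # switching interpreters.
            if isinstance(value, bytes):
                raise TypeError('%s must be text, not bytes' % self.name)
            setattr(obj, self.name, value)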
Early in our port, we established a standard set of __future__ imports and started incrementally converting files over to them, mainly because we weren't yet sure what else to do and it seemed likely to be helpful. absolute_import was definitely reasonable (and not often a problem in our code), and print_function was annoying but necessary. In hindsight I'm not sure about unicode_literals, though. For files that only deal with bytes and text it was reasonable enough, but as I mentioned above there were also a number of cases where we needed literals of the language's native str type, i.e. bytes in Python 2 and text in Python 3: this was particularly noticeable in WSGI contexts, but also cropped up in some other surprising places. We generally either omitted unicode_literals or used six.ensure_str in such cases, but it was definitely a bit awkward and maybe I should have listened more to people telling me it might be a bad idea.
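Concretely, the sort of standard header this amounted to (illustrative; our exact set evolved over time, and note the deliberate absence of unicode_literals):

    from __future__ import absolute_import, print_function

    import six

    def native_header_value(value):
        # WSGI wants native strings: bytes on Python 2, text on Python 3;
        # six.ensure_str returns exactly that.
        return six.ensure_str(value)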
A lot of Launchpad's early tests used doctest, mainly in the style where you have text files that interleave narrative commentary with examples. The development team later reached consensus that this was best avoided in most cases, but by then there were far too many doctests to conveniently rewrite in some other form. Porting doctests to Python 3 is really annoying. You run into all the little changes in how objects are represented as text (particularly u'...' versus '...', but plenty of other cases as well); you have next to no tools to do anything useful like skipping individual bits of a doctest that don't apply; using __future__ imports requires the rather obscure approach of adding the relevant names to the doctest's globals in the relevant DocFileSuite or DocTestSuite; dealing with many exception tracebacks requires something like zope.testing.renormalizing; and whatever code refactoring tools you're using probably don't work properly. Basically, don't have done that. It did all turn out to be tractable for us in the end, and I managed to avoid using much in the way of fragile doctest extensions aside from the aforementioned zope.testing.renormalizing, but it was not an enjoyable experience.
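The "rather obscure approach" for __future__ imports, sketched from memory (doctest derives compiler flags from any __future__ feature objects it finds in the globs, so injecting print_function there makes the examples compile with it; example.txt is a placeholder name):

    from __future__ import print_function

    from doctest import DocFileSuite

    def test_suite():
        return DocFileSuite(
            'example.txt',
            globs={'print_function': print_function},
        )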
Regressions
I know of nine regressions that reached Launchpad's production systems as a result of this porting work; of course there were various other regressions caught by CI or in manual testing. (Considering the size of this project, I count it as a resounding success that there were only nine production issues, and that for the most part we were able to fix them quickly.)
Equality testing of removed database objects
One of the things we had to do while porting to Python 3 was to implement the __eq__, __ne__, and __hash__ special methods for all our database objects. This was quite conceptually fiddly, because doing this requires knowing each object's primary key, and that may not yet be available if we've created an object in Python but not yet flushed the actual INSERT statement to the database (most of our primary keys are auto-incrementing sequences). We thus had to take care to flush pending SQL statements in such cases in order to ensure that we know the primary keys.
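The shape of those methods, as a hedged sketch rather than Launchpad's actual implementation (which also has to flush pending inserts to learn the primary key):

    class DatabaseObject(object):
        def __init__(self, table, id_):
            self.__table__ = table
            self.id = id_

        def __eq__(self, other):
            return (
                isinstance(other, DatabaseObject)
                and self.__table__ == other.__table__
                and self.id == other.id
            )

        def __ne__(self, other):
            # Python 2 does not derive != from ==, so spell it out.
            return not self.__eq__(other)

        def __hash__(self):
            return hash((self.__table__, self.id))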
However, it's possible to have a problem at the other end of the object lifecycle: that is, a Python object might still be reachable in memory even though the underlying row has been DELETEd from the database. In most cases we don't keep removed objects around for obvious reasons, but it can happen in caching code, and buildd-manager crashed as a result (in fact while it was still running on Python 2). We had to take extra care to avoid this problem.
Debian imports crashed on non-UTF-8 filenames
Python 2 has some unfortunate behaviour around passing bytes or Unicode strings (depending on the platform) to shutil.rmtree, and the combination of some porting work and a particular source package in Debian that contained a non-UTF-8 file name caused us to run into this. The fix was to ensure that the argument passed to shutil.rmtree is a str regardless of Python version.
We'd actually run into something similar before: it's a subtle porting gotcha, since it's quite easy to end up passing Unicode strings to shutil.rmtree if you're in the process of porting your code to Python 3, and you might easily not notice if the file names in your tests are all encoded using UTF-8.
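The shape of the fix (six.ensure_str is real; the wrapper is illustrative):

    import shutil

    import six

    def remove_tree(path):
        # Always hand shutil.rmtree the native str type, whatever the
        # caller had: bytes on Python 2, text on Python 3.
        shutil.rmtree(six.ensure_str(path))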
lazr.restful ETags
We eventually got far enough along that we could switch one of our four
appserver machines (we have quite a number of other machines too, but the
appservers handle web and API requests) to Python 3 and see what happened.
By this point our extensive test suite had shaken out the vast majority of
the things that could go wrong, but there was always going to be room for
some interesting edge cases.
One of the Ubuntu kernel team reported that they were seeing an increase in 412 Precondition Failed errors in some of their scripts that use our webservice API. These can happen when you're trying to modify an existing resource: the underlying protocol involves sending an If-Match header with the ETag that the client thinks the resource has, and if this doesn't match the ETag that the server calculates for the resource then the client has to refresh its copy of the resource and try again. We initially thought that this might be legitimate, since it can happen in normal operation if you collide with another client making changes to the same resource, but it soon became clear that something stranger was going on: we were getting inconsistent ETags for the same object even when it was unchanged. Since we'd recently switched a quarter of our appservers to Python 3, that was a natural suspect.
Our lazr.restful package provides the framework for our webservice API, and roughly speaking it generates ETags by serializing objects into some kind of canonical form and hashing the result. Unfortunately the serialization was dependent on the Python version in a few ways; in particular, it serialized lists of strings, such as lists of bug tags, differently: Python 2 used [u'foo', u'bar', u'baz'] where Python 3 used ['foo', 'bar', 'baz']. In lazr.restful 1.0.3 we switched to using JSON for this, removing the Python version dependency and ensuring consistent behaviour between appservers.
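The version-independent approach is easy to sketch (the hashing scheme here is illustrative, not lazr.restful's exact code): repr() of a list of text differs between 2 and 3, but JSON does not.

    import hashlib
    import json

    def make_etag(value):
        # json.dumps(['foo', 'bar']) is identical on Python 2 and 3,
        # unlike repr(), which gives [u'foo', u'bar'] vs ['foo', 'bar'].
        canonical = json.dumps(value, sort_keys=True)
        return hashlib.sha1(canonical.encode('utf-8')).hexdigest()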
Memory leaks
This problem took the longest to solve. We noticed fairly quickly from our graphs that the appserver machine we'd switched to Python 3 had a serious memory leak. Our appservers had always been a bit leaky, but now it wasn't so much "a small hole that we can bail occasionally" as "the boat is sinking rapidly". (Yes, this got in the way of working out what was going on with ETags for a while.)
I spent ages messing around with various attempts to fix this. Since only a quarter of our appservers were affected, and we could get by on 75% capacity for a while, it wasn't urgent, but it was definitely annoying. After spending some quality time with objgraph, for some time I thought traceback reference cycles might be at fault, and I sent a number of fixes to various upstream projects for those (e.g. zope.pagetemplate). Those didn't help the leaks much though, and after a while it became clear to me that this couldn't be the sole problem: Python has a cyclic garbage collector that will eventually collect reference cycles as long as there are no strong references to any objects in them, although it might not happen very quickly. Something else must be going on.
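For anyone facing something similar, the kind of objgraph session involved looks like this (objgraph and these calls are real; which types to chase is of course case-specific):

    import gc

    import objgraph

    gc.collect()                    # flush collectable cycles first
    objgraph.show_growth(limit=10)  # which types are accumulating?

    leaked = objgraph.by_type('traceback')
    if leaked:
        # Render the chain of strong references keeping one alive.
        objgraph.show_backrefs(leaked[0], max_depth=5,
                               filename='backrefs.png')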
Debugging reference leaks in any non-trivial and long-running Python program is extremely arduous, especially with ORMs that naturally tend to end up with lots of cycles and caches. After a while I formed a hypothesis that zope.server might be keeping a strong reference to something, although I never managed to nail it down more firmly than that. This was an attractive theory, as we were already in the process of migrating to Gunicorn for other reasons anyway, and Gunicorn also has a convenient max_requests setting that's good at mitigating memory leaks. Getting this all in place took some time, but once we did we found that everything was much more stable.
This isn't completely satisfying, as we never quite got to the bottom of the leak itself, and it's entirely possible that we've only papered over it using max_requests: I expect we'll gradually back off on how frequently we restart workers over time to try to track this down. However, pragmatically, it's no longer an operational concern.
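For reference, the mitigation is a couple of lines of Gunicorn configuration (max_requests and max_requests_jitter are real Gunicorn settings; the numbers here are illustrative):

    # gunicorn.conf.py: recycle each worker after roughly this many
    # requests, bounding whatever the leak is.
    max_requests = 1000
    max_requests_jitter = 100  # desynchronize restarts across workers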
Mirror prober HTTPS proxy handling
After we switched our script servers to Python 3, we had several reports of mirror probing failures. (Launchpad keeps lists of Ubuntu archive and image mirrors, and probes them every so often to check that they're reasonably complete and up to date.) This only affected HTTPS mirrors when probed via a proxy server, support for which is a relatively recent feature in Launchpad and involved some code that we never managed to unit-test properly: of course this is exactly the code that went wrong. Sadly I wasn't able to sort out that gap, but at least the fix was simple.
Non-MIME-encoded email headers
As I mentioned above, there were substantial changes in the email package between Python 2 and 3, and indeed between minor versions of Python 3. Our test coverage here is pretty good, but it's an area where it's very easy to have gaps. We noticed that a script that processes incoming email was crashing on messages with headers that were non-ASCII but not MIME-encoded (and indeed then crashing again when it tried to send a notification of the crash!). The only examples of these I looked at were spam, but we still didn't want to crash on them.

The fix involved being somewhat more careful about both the handling of headers returned by Python's email parser and the building of outgoing email notifications. This seems to be working well so far, although I wouldn't be surprised to find the odd other incorrect detail in this sort of area.
Failure to handle non-ISO-8859-1 URL-encoded form input
Remember how I said that parsing HTTP form data was thorny? After we finished upgrading all our appservers to Python 3, people started reporting that they couldn't post Unicode comments to bugs, which turned out to happen only if the attempt was made using JavaScript, and was because I hadn't quite managed to get URL-encoded form data working properly with zope.publisher and multipart. The current standard describes the URL-encoded format for form data as "in many ways an aberrant monstrosity", so this was no great surprise.
Part of the problem was some very strange choices in zope.publisher dating back to 2004 or earlier, which I attempted to clean up and simplify. The rest was that Python 2's urlparse.parse_qs unconditionally decodes percent-encoded sequences as ISO-8859-1 if they're passed in as part of a Unicode string, so multipart needs to work around this on Python 2. I'm still not completely confident that this is correct in all situations, but at least now that we're on Python 3 everywhere the matrix of cases we need to care about is smaller.
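A sketch of the classic workaround (my own illustration, not multipart's actual patch): re-encode the mis-decoded ISO-8859-1 text to recover the raw bytes, then decode with the charset the form really used.

    import sys

    if sys.version_info[0] == 2:
        from urlparse import parse_qs
    else:
        from urllib.parse import parse_qs

    def parse_qs_utf8(query):
        parsed = parse_qs(query)
        if sys.version_info[0] == 2 and isinstance(query, unicode):  # noqa: F821
            # Python 2 decoded %XX escapes as ISO-8859-1; undo that and
            # decode as UTF-8 instead.
            parsed = dict(
                (name.encode('latin-1').decode('utf-8'),
                 [v.encode('latin-1').decode('utf-8') for v in values])
                for name, values in parsed.items())
        return parsed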
Inconsistent marshalling of Loggerhead s disk cache
We use Loggerhead for providing web browsing of Bazaar branches. When we upgraded one of its two servers to Python 3, we immediately noticed that the one still on Python 2 was failing to read back its revision information cache, which it stores in a database on disk. (We noticed this because it caused a deployment to fail: when we tried to roll out new code to the instance still on Python 2, Nagios checks had already caused an incompatible cache to be written for one branch from the Python 3 instance.)

This turned out to be a similar problem to the pickle issue mentioned above, except this one was with marshal, which I didn't think to look for because it's a relatively obscure module mostly used for internal purposes by Python itself; I'm not sure that Loggerhead should really be using it in the first place. The fix was relatively straightforward, complicated mainly by now needing to cope with throwing away unreadable cache data.
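The general shape of that fix, as a sketch rather than Loggerhead's actual code: pin the marshal format to version 2, the highest Python 2.7 understands, and treat unreadable entries as cache misses rather than crashing.

    import marshal

    MARSHAL_VERSION = 2  # readable by both Python 2.7 and Python 3

    def dump_cache_entry(value):
        return marshal.dumps(value, MARSHAL_VERSION)

    def load_cache_entry(data):
        try:
            return marshal.loads(data)
        except (EOFError, ValueError, TypeError):
            return None  # unreadable or stale entry: treat as a cache miss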
Ironically, if we'd just gone ahead and taken the nominally riskier path of upgrading both servers at the same time, we might never have had a problem here.
Intermittent bzr failures
Finally, after we upgraded one of our two Bazaar codehosting servers to
Python 3, we had a
report of intermittent
bzr branch
hangs. After some digging I found this in our logs:
Traceback (most recent call last):
...
File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/conch/ssh/channel.py", line 136, in addWindowBytes
self.startWriting()
File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/lazr/sshserver/session.py", line 88, in startWriting
resumeProducing()
File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/internet/process.py", line 894, in resumeProducing
for p in self.pipes.itervalues():
builtins.AttributeError: 'dict' object has no attribute 'itervalues'
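The crash is a Python 2 idiom surviving into code now running on Python 3: dict.itervalues() no longer exists there. The general shape of the fix (illustrative; the actual bug was in Twisted's process handling, not our code):

    import six

    def close_all(pipes):
        # values() works on both versions (a list on 2, a view on 3);
        # six.itervalues avoids building the list on Python 2.
        for p in six.itervalues(pipes):
            p.close()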